ai.txt: A Domain-Specific Language for Guiding AI Interactions with the Internet
Li, Yuekang, Song, Wei, Zhu, Bangshuo, Gong, Dong, Liu, Yi, Deng, Gelei, Chen, Chunyang, Ma, Lei, Sun, Jun, Walsh, Toby, Xue, Jingling
We introduce ai.txt, a novel domain-specific language (DSL) designed to explicitly regulate interactions between AI models, agents, and web content, addressing critical limitations of the widely adopted robots.txt standard. As AI increasingly engages with online materials for tasks such as training, summarization, and content modification, existing regulatory methods lack the necessary granularity and semantic expressiveness to ensure ethical and legal compliance. ai.txt extends traditional URL-based access controls by enabling precise element-level regulations and incorporating natural language instructions interpretable by AI systems. To facilitate practical deployment, we provide an integrated development environment with code autocompletion and automatic XML generation. Furthermore, we propose two compliance mechanisms: XML-based programmatic enforcement and natural language prompt integration, and demonstrate their effectiveness through preliminary experiments and case studies. Our approach aims to aid the governance of AI-Internet interactions, promoting responsible AI use in digital ecosystems.
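The abstract describes XML-based programmatic enforcement of element-level access rules. As a rough illustration of how such a policy check might work, the sketch below parses a hypothetical ai.txt-style XML policy and evaluates whether a given page element may be used for a given task. The tag names, attributes, and matching semantics here are illustrative assumptions, not the actual ai.txt schema.

```python
# Hypothetical sketch of element-level policy enforcement in the spirit of
# ai.txt. The <rule> schema (path/element/action/task attributes) is an
# assumption for illustration, not the real ai.txt format.
import xml.etree.ElementTree as ET

POLICY = """
<aitxt>
  <rule path="/articles/*" element="p.premium" action="deny" task="training"/>
  <rule path="/articles/*" element="p" action="allow" task="training"/>
</aitxt>
"""

def is_allowed(policy_xml, path, element, task):
    """Return True if the first matching rule allows the given task."""
    root = ET.fromstring(policy_xml)
    for rule in root.findall("rule"):
        if rule.get("task") != task:
            continue
        # crude glob: a trailing '*' matches any continuation of the path
        rpath = rule.get("path", "")
        if rpath.endswith("*"):
            if not path.startswith(rpath[:-1]):
                continue
        elif rpath != path:
            continue
        if rule.get("element") == element:
            return rule.get("action") == "allow"
    return False  # default-deny for elements no rule covers

print(is_allowed(POLICY, "/articles/42", "p", "training"))          # True
print(is_allowed(POLICY, "/articles/42", "p.premium", "training"))  # False
```

First-match-wins with a default-deny fallback is one plausible semantics; the actual ai.txt language may resolve rule conflicts differently.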
Generation of reusable learning objects from digital medical collections: An analysis based on the MASMDOA framework
Buendía, Félix, Gayoso-Cabada, Joaquín, Sierra, José-Luis
Learning Objects represent a widespread approach to structuring instructional materials in a large variety of educational contexts. The main aim of this work consists of analyzing from a qualitative point of view the process of generating reusable learning objects (RLOs) followed by Clavy, a tool that can be used to retrieve data from multiple medical knowledge sources and reconfigure such sources in diverse multimedia-based structures and organizations. From these organizations, Clavy is able to generate learning objects which can be adapted to various instructional healthcare scenarios with several types of user profiles and distinct learning requirements. Moreover, Clavy provides the capability of exporting these learning objects through educational standard specifications, which improves their reusability features. The analysis insights highlight the importance of having a tool able to transfer knowledge from the available digital medical collections to learning objects that can be easily accessed by medical students and healthcare practitioners through the most popular e-learning platforms.
PostDoc: Generating Poster from a Long Multimodal Document Using Deep Submodular Optimization
Jaisankar, Vijay, Bandyopadhyay, Sambaran, Vyas, Kalp, Chaitanya, Varre, Somasundaram, Shwetha
A poster generated from a long input document can be considered a one-page, easy-to-read multimodal (text and images) summary presented on an attractive template with good design elements. Automatic transformation of a long document into a poster is an understudied yet challenging task. It involves content summarization of the input document followed by template generation and harmonization. In this work, we propose a novel deep submodular function which can be trained on ground-truth summaries to extract multimodal content from the document and explicitly ensures good coverage, diversity, and alignment of text and images. Then, we use an LLM-based paraphraser and propose to generate a template with various design aspects conditioned on the input content. We show the merits of our approach through extensive automated and human evaluations.
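The abstract's key mechanism is maximizing a monotone submodular function to select covering, diverse content under a budget. The sketch below is not the paper's trained deep submodular function; it is the standard greedy routine applied to a simple facility-location (coverage) objective, which illustrates how submodular selection trades off coverage and diversity.

```python
# Illustrative sketch: greedy maximization of a facility-location objective
# f(S) = sum_j max_{i in S} sim[i][j], a monotone submodular score for which
# greedy selection carries a (1 - 1/e) approximation guarantee. The toy
# similarity matrix below is invented for illustration.
def greedy_submodular(items, sim, budget):
    selected = []
    covered = [0.0] * len(items)  # current best similarity per item
    for _ in range(budget):
        best, best_gain = None, 0.0
        for i in items:
            if i in selected:
                continue
            # marginal gain: how much adding i improves total coverage
            gain = sum(max(sim[i][j], covered[j]) - covered[j] for j in items)
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            break
        selected.append(best)
        covered = [max(sim[best][j], covered[j]) for j in items]
    return selected

# toy similarity over 4 content elements: {0,1} and {2,3} are near-duplicates
sim = [
    [1.0, 0.9, 0.1, 0.0],
    [0.9, 1.0, 0.0, 0.1],
    [0.1, 0.0, 1.0, 0.8],
    [0.0, 0.1, 0.8, 1.0],
]
print(greedy_submodular(range(4), sim, budget=2))  # [0, 2]
```

Note how the greedy picks one element from each near-duplicate pair: after selecting item 0, items 1's marginal gain collapses, so item 2 wins the second slot — the diversity effect the abstract describes.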
Disentangling Learnable and Memorizable Data via Contrastive Learning for Semantic Communications
Chaccour, Christina, Saad, Walid
Achieving AI-native wireless networks is necessary for the operation of future 6G applications such as the metaverse. Nonetheless, current communication schemes are, at heart, a mere reconstruction process that lacks reasoning. One key solution for evolving wireless communication toward a human-like conversation is semantic communications. In this paper, a novel machine reasoning framework is proposed to pre-process and disentangle source data so as to make it semantic-ready. In particular, a novel contrastive learning framework is proposed, whereby instance and cluster discrimination are performed on the data. These two tasks increase the cohesiveness between data points mapping to semantically similar content elements and disentangle data points of semantically different content elements. Subsequently, the deep semantic clusters formed are ranked according to their level of confidence. Deep semantic clusters of highest confidence are considered learnable, semantic-rich data, i.e., data that can be used to build a language in a semantic communications system. The least confident ones are considered random, semantic-poor, memorizable data that must be transmitted classically. Our simulation results showcase the superiority of our contrastive learning approach in terms of semantic impact and minimalism. In fact, the length of the achieved semantic representation is reduced by 57.22% compared to vanilla semantic communication systems, thus achieving minimalist semantic representations.
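The ranking step the abstract describes — score each cluster by its assignment confidence, then split the data into learnable (semantic-rich) and memorizable (semantic-poor) subsets — can be sketched as follows. The interface (soft assignment vectors, a confidence threshold) is an assumed simplification, not the paper's implementation.

```python
# Illustrative sketch of confidence-based cluster ranking. After instance-
# and cluster-level contrastive training, each sample carries a soft cluster
# assignment; clusters whose mean assignment confidence clears a threshold
# are treated as learnable, the rest as memorizable. Threshold and toy
# assignments are invented for illustration.
from collections import defaultdict

def split_learnable(soft_assignments, threshold=0.8):
    """soft_assignments: per-sample probability vectors over clusters.
    Returns (learnable_clusters, memorizable_clusters)."""
    per_cluster = defaultdict(list)
    for probs in soft_assignments:
        k = max(range(len(probs)), key=probs.__getitem__)  # hard label
        per_cluster[k].append(probs[k])                    # its confidence
    confidence = {k: sum(v) / len(v) for k, v in per_cluster.items()}
    learnable = sorted(k for k, c in confidence.items() if c >= threshold)
    memorizable = sorted(k for k, c in confidence.items() if c < threshold)
    return learnable, memorizable

assignments = [
    [0.95, 0.05], [0.90, 0.10],  # cluster 0: confident, semantic-rich
    [0.40, 0.60], [0.45, 0.55],  # cluster 1: diffuse, semantic-poor
]
print(split_learnable(assignments))  # ([0], [1])
```

In the paper's framing, cluster 0's samples would feed the semantic language, while cluster 1's would be transmitted classically.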
Recommender systems with deep learning architectures
This post addresses the general problem of constructing a deep-learning-based recommender system. The particular architecture described here is the one powering the new smart feed of the iki service, boosting your skills on a daily basis -- to check its performance, please try the product beta. If you are familiar with the general idea of recommender systems and mainstream approaches and would like to go straight to the details of our solution, please skip the first two sections of this post. Recommender systems have changed the way we interact with many services. Instead of providing static data, they offer an interactive experience: an option to leave your feedback and to personalise the information you are given.